Not dead, just resting: The practical value of per publication citation indicators
Abstract
Citation counts do not measure research impact under any reasonable definition of impact. Instead, they are indicators of research impact or quality in the sense that, in some disciplines and when appropriately processed, they tend to correlate positively and statistically significantly with human judgements of impact or quality. Although there is theory to suggest that, at least in the sciences, citations could tend to reflect the contributions of papers to the onward progress of scholarship (Merton, 1973), there are many examples demonstrating that this is not always the case (MacRoberts & MacRoberts, 1989, 1996, 2010; Seglen, 1997). Moreover, citation-based indicators are often wrong or misleading, and for some fields they are completely useless (e.g., music and art in Table A3 of HEFCE, 2015a). Nevertheless, as argued below, as long as they correlate positively and significantly with human judgements, they can play a useful role in supporting peer review (see also: van Raan, 1998). They may even serve as a primary source of evidence where the expense of peer review relative to its benefits makes it impractical, or where peer judgements are thought to be too biased. Thus, citation-based indicators should not be used as a primary arbiter of research impact unless there are practical reasons why better alternatives are inappropriate. Peer review is, in general, a better solution and is the approach used in the UK to direct a large share of government research funding (£1.6 billion per year in 2014-15: Wilsdon, Allen, Belfiore, Campbell, et al., 2015), with the remainder distributed primarily through competitive project grant applications. The Abramo and D’Angelo (2016) argument for a particular type of indicator should be analysed within this context: not as an attempt to construct a perfect measure of research impact, but as an attempt to construct the best possible citation-based indicator of research impact.

When counting citations to collections of papers in order to compare two or more heterogeneous sets of papers, it is important to normalise the counts to make the comparisons fairer. For example, medical research is cited more frequently than information science research, and older papers have had longer to attract citations. One way of reducing these biases is to divide each article’s citation count by the average citation count of articles of the same field, document type, and publication year, so that scores above 1 suggest that a publication has had above average impact, irrespective of its field and publication year. The Mean Normalised Citation Score (MNCS) (Waltman, van Eck, van Leeuwen, Visser, & van Raan, 2011a,b) extends this to sets of documents by calculating the arithmetic mean of the normalised citation scores of the individual publications, so that an MNCS above 1 indicates that the collection of papers has had an above average citation impact, irrespective of the range of publication years, fields, and document types. There have been suggestions for fine-tuning aspects of this calculation, such as replacing the arithmetic mean with the geometric mean, which handles highly skewed data better (Fairclough & Thelwall, 2015), or reporting instead the proportion of articles in a highly cited percentile (e.g., the top 10% or top 1%) (Tijssen, Visser, & Van Leeuwen, 2002), but neither of these affects the main discussion here.
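For concreteness, the normalisation described above can be written out as follows (the notation is mine, not from the cited sources): if c_i is the citation count of publication i and e_i is the mean citation count of all publications of the same field, document type, and publication year, then

\[
\mathrm{NCS}_i = \frac{c_i}{e_i}, \qquad
\mathrm{MNCS} = \frac{1}{n}\sum_{i=1}^{n}\frac{c_i}{e_i},
\]

so that NCS_i > 1 marks an individual publication with above average impact and MNCS > 1 marks a set of n publications with above average impact. A plausible form of the geometric-mean alternative, assuming the usual offset of 1 to accommodate uncited papers, would be \(\exp\big(\tfrac{1}{n}\sum_{i=1}^{n}\ln(1+\tfrac{c_i}{e_i})\big)-1\), although the exact convention may vary between implementations.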
This is because the main issue is whether indicators that, like these, are calculated on a per publication basis can be useful at all, rather than which of them is best. The MNCS and related indicators are used in applied scientometrics as evidence to aid peer review and to support policy evaluations and decisions. In this context, Abramo and D’Angelo (2016) argue that they are “not worthy of further use or attention”. The root problem is that averaging citation counts across exhaustive sets of articles to be compared can lead to misleading results, as they clearly demonstrate. For example, if there are ten sets of articles, each containing all of the publications of a given Dutch physics research group during 2011-2015, and the purpose is to decide how much government funding each group should receive during 2016-2020, then groups would be …
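The kind of distortion at issue can be seen with a minimal sketch using invented numbers (these are illustrative assumptions, not data from Abramo and D’Angelo): each value below is a hypothetical paper’s citation count divided by the world average for its field, document type, and year.

# Minimal sketch with invented normalised scores: averaging per
# publication can penalise extra output.
group_a = [2.0, 2.0]        # two papers, each at twice the world average
group_b = [2.0, 2.0, 0.5]   # the same two papers plus one below-average paper

def mncs(scores):
    # Mean Normalised Citation Score: arithmetic mean of per-paper scores.
    return sum(scores) / len(scores)

print(mncs(group_a))  # 2.0
print(mncs(group_b))  # 1.5

Group B has more total normalised impact than Group A (4.5 against 4.0), yet its MNCS is lower, so a funding formula driven by MNCS would effectively penalise it for publishing an additional, below average paper.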
Journal: J. Informetrics
Volume: 10
Publication year: 2016